Last Update: 2025/3/26
Anthropic Messages API
The Anthropic Messages API allows you to generate conversational responses using Anthropic's language models. This document provides an overview of the API endpoints, request parameters, and response structure.
Endpoint
POST https://platform.llmprovider.ai/v1/messages
Request Headers
Header | Value |
---|---|
x-api-key | YOUR_API_KEY |
anthropic-version | 2023-06-01 |
Content-Type | application/json |
Request Body
The request body should be a JSON object with the following parameters:
Parameter | Type | Description |
---|---|---|
model | string | The model to use (e.g., claude-3-opus-20240229 ). |
messages | object[] | A list of message objects representing the conversation so far. |
messages.role | string | The role of the message sender (user or assistant). |
messages.content | string | The content of the message. |
max_tokens | integer | The maximum number of tokens to generate before stopping. |
metadata | object | (Optional) An object describing metadata about the request. |
stop_sequences | string[] | (Optional) Custom text sequences that will cause the model to stop generating. |
stream | boolean | (Optional) Whether to incrementally stream the response using server-sent events. |
system | string | (Optional) A system prompt: a way of providing context and instructions to Claude, such as specifying a particular goal or role. |
temperature | number | (Optional) Amount of randomness injected into the response, between 0 and 1. |
tool_choice | object | (Optional) How the model should use the provided tools. The model can use a specific tool, any available tool, or decide by itself. |
tools | object[] | (Optional) Definitions of tools that the model may use. |
top_k | integer | (Optional) Only sample from the top K options for each subsequent token. Required range: x > 0 |
top_p | number | (Optional) Use nucleus sampling. Required range: 0 < x < 1 |
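For illustration, the tools and tool_choice parameters can be sketched as a request payload. The get_weather tool, its input schema, and the "auto" choice below are hypothetical examples based on Anthropic's published JSON Schema tool-use format, not guarantees about this endpoint:

```python
# Hypothetical tool definition illustrating the tools / tool_choice format.
# The tool name "get_weather" and its schema are illustrative, not part of this spec.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "input_schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        }
    ],
    # Let the model decide whether to call a tool ("auto");
    # a specific tool can be forced with {"type": "tool", "name": "get_weather"}.
    "tool_choice": {"type": "auto"},
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ]
}
```

When the model decides to call a tool, the response will contain a tool-use content block instead of (or alongside) plain text.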
Example Request
{
"model": "claude-3-5-sonnet-20241022",
"messages": [
{
"role": "user",
"content": "Hello, Claude!"
}
],
"max_tokens": 1024
}
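As a sketch of how the optional parameters combine with a multi-turn conversation, the following builds a fuller request body in Python; the system prompt text and stop sequence are illustrative values:

```python
import json

# Sketch of a request body using the optional parameters described above.
request_body = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "system": "You are a concise assistant.",  # optional system prompt
    "temperature": 0.7,                        # 0 = most deterministic, 1 = most random
    "stop_sequences": ["\n\nHuman:"],          # illustrative custom stop sequence
    "messages": [
        {"role": "user", "content": "Hello, Claude!"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "Summarize our chat."}
    ]
}

print(json.dumps(request_body, indent=2))
```

Note that messages must alternate between user and assistant roles, with the conversation history supplied on every request.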
Response Body
The response body will be a JSON object containing the generated response and metadata.
Field | Type | Description |
---|---|---|
id | string | Unique identifier for the message. |
model | string | The model that handled the request. |
role | string | Conversational role of the generated message. This will always be assistant . |
content | array | Array of content objects containing the response. |
content.type | string | The type of content, usually text . |
content.text | string | The text content of the response. |
stop_reason | string | The reason generation stopped: end_turn, max_tokens, stop_sequence, or tool_use. |
stop_sequence | string | The custom stop sequence that was matched, if any; otherwise null. |
type | string | The type of object returned, usually message . |
usage | object | Token usage statistics for the request. |
Example Response
{
"id": "msg_123abc",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "Hello! How can I help you today?"
}
],
"model": "claude-3-5-sonnet-20241022",
"usage": {
"input_tokens": 10,
"output_tokens": 8
}
}
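Because content is an array of blocks rather than a single string, client code needs to extract and join the text blocks. A minimal sketch, using a dict that mirrors the example response above:

```python
# Sketch: extracting the reply text and token counts from a parsed response.
# response_json mirrors the example response shown above.
response_json = {
    "id": "msg_123abc",
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Hello! How can I help you today?"}],
    "model": "claude-3-5-sonnet-20241022",
    "usage": {"input_tokens": 10, "output_tokens": 8}
}

# Concatenate all text blocks; content is a list because a response
# may contain several blocks (e.g. text plus tool-use blocks).
reply = "".join(
    block["text"] for block in response_json["content"] if block["type"] == "text"
)
total_tokens = (
    response_json["usage"]["input_tokens"] + response_json["usage"]["output_tokens"]
)

print(reply)         # Hello! How can I help you today?
print(total_tokens)  # 18
```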
Code Examples

Shell

curl -X POST https://platform.llmprovider.ai/v1/messages \
  -H "x-api-key: $YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "max_tokens": 1024
  }'
Node.js

const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/messages';

const data = {
  model: 'claude-3-5-sonnet-20241022',
  messages: [
    {
      role: 'user',
      content: 'Hello!'
    }
  ],
  max_tokens: 1024
};

const headers = {
  'x-api-key': apiKey,
  'anthropic-version': '2023-06-01',
  'Content-Type': 'application/json'
};

axios.post(url, data, { headers })
  .then(response => {
    console.log('Response:', response.data);
  })
  .catch(error => {
    console.error('Error:', error.message);
  });
Python

import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/messages'

headers = {
    'x-api-key': api_key,
    'anthropic-version': '2023-06-01',
    'Content-Type': 'application/json'
}

data = {
    'model': 'claude-3-5-sonnet-20241022',
    'messages': [
        {
            'role': 'user',
            'content': 'Hello!'
        }
    ],
    'max_tokens': 1024
}

# requests serializes the payload and sets up JSON encoding via json=
response = requests.post(url, headers=headers, json=data)

if response.status_code == 200:
    print('Response:', response.json())
else:
    print('Error:', response.status_code, response.text)
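When stream is true, the API returns server-sent events rather than a single JSON body. As a sketch, assuming the stream follows Anthropic's published event types (content_block_delta events carrying text_delta chunks), a minimal accumulator for the event lines might look like this; the canned lines at the bottom stand in for a real HTTP stream:

```python
import json

def collect_stream_text(sse_lines):
    """Accumulate text deltas from server-sent-event lines.

    Assumes Anthropic-style events where content_block_delta events
    carry {"delta": {"type": "text_delta", "text": ...}}.
    """
    text = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip event-name lines, comments, and blank keep-alives
        event = json.loads(line[len("data:"):].strip())
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                text.append(delta.get("text", ""))
    return "".join(text)

# Canned event lines standing in for a real stream:
lines = [
    'data: {"type": "message_start"}',
    'data: {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hel"}}',
    'data: {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "lo!"}}',
    'data: {"type": "message_stop"}',
]
print(collect_stream_text(lines))  # Hello!
```

In a real client the lines would come from iterating over the HTTP response (e.g. response.iter_lines() with requests and stream=True).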
For more details, refer to the Anthropic API documentation.